Design of a Quality Management System based on the EU Artificial Intelligence Act
Mustroph, Henryk, Rinderle-Ma, Stefanie
The Artificial Intelligence Act of the European Union mandates that providers and deployers of high-risk AI systems establish a quality management system (QMS). Among other criteria, a QMS shall help to i) identify, analyze, evaluate, and mitigate risks, ii) ensure evidence of compliance with training, validation, and testing data, and iii) verify and document the AI system design and quality. Current research mainly addresses conceptual considerations and framework designs for AI risk assessment and auditing processes. However, it often overlooks practical tools that actively involve and support humans in checking and documenting high-risk or general-purpose AI systems. This paper addresses this gap by proposing requirements derived from legal regulations and a generic design and architecture of a QMS for AI systems verification and documentation. A first version of a prototype QMS is implemented, integrating LLMs as examples of AI systems and focusing on an integrated risk management sub-service. The prototype is evaluated on i) a user story-based qualitative requirements assessment using potential stakeholder scenarios and ii) a technical assessment of the required GPU storage and performance.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.34)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other?
Mokander, Jakob, Juneja, Prathm, Watson, David, Floridi, Luciano
On the whole, the U.S. Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).
- North America > United States (1.00)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Italy > Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.05)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
5 Best Practices for Testing AI Applications
In light of the April 2021 announcement of the world's first legislative framework for regulating Artificial Intelligence (AI), the European Artificial Intelligence Act (EU AIA), now is an opportune time for developers to revisit their strategies for testing AI applications. Incoming regulations mean that the group of stakeholders who care about your testing results just got bigger and more involved. The stakes are high, not least because companies that violate the terms of the legislation could face fines higher than those levied under the General Data Protection Regulation (GDPR). For the purpose of transparency, certain types of AI also have to make their accuracy metrics available to users, which adds to the pressure to get functional testing right. Following on from Applause's step-by-step guide to training and testing your AI algorithm, this article summarizes how developers should be testing AI applications in anticipation of the new era of AI regulations.
- Europe (0.32)
- South America > Chile (0.05)
- Asia > India (0.05)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)